Intel / Information Warfare
OSINT intel briefs, structured summaries, and trend signals from curated sources. Topic: Information Warfare.
AI and disinformation – How can Europe safeguard trust in the media?
Full timeline
0:00–5:00
Artificial intelligence is significantly influencing societal interactions and media trust. The AI for Trust project, funded by Horizon Europe, aims to combat disinformation through collaborative efforts.
- Artificial intelligence is reshaping societal interactions with information. It presents both opportunities and challenges in the media landscape
- The event focuses on how Europe can maintain trust in media amidst the rise of AI and disinformation
- Jennifer Baker serves as the moderator, guiding the discussion and facilitating audience questions through the Slido tool
- The AI Act and related initiatives aim to address the risks associated with AI technologies. This includes the AI content action plan
- The AI for Trust project, funded by Horizon Europe, is a collaborative effort to combat disinformation using AI
- A senior official provides opening remarks and reflects on the project's three-year journey
5:00–10:00
The AI for Trust project aims to create an online platform that collects and processes data from social and news media using advanced AI tools. This initiative seeks to provide insights on disinformation to various stakeholders, enhancing the understanding of the European media landscape.
- The AI for Trust project aimed to create an online platform that collects data from social and news media. It processes this data with advanced AI tools
- This platform provides disinformation insights to fact-checkers, journalists, policymakers, and researchers. It enhances the understanding of the European media landscape
- Collaboration among project partners was crucial. They developed innovative technology and research while maintaining a cooperative spirit throughout the project's duration
- The project faced challenges due to rapid developments in AI. This was particularly evident following significant events like the acquisition of Twitter and the public release of ChatGPT
- A shared technological infrastructure and data space were deemed necessary. This was to address common challenges faced by various projects tackling disinformation
- The efforts of the AI for Trust project contribute to building a shield for European democracy. This was a key topic in their recent collaborative event in Brussels
10:00–15:00
AI is a powerful tool that can both threaten and provide solutions in the fight against disinformation, particularly during elections. The financial burden of producing quality news complicates the maintenance of a healthy information ecosystem.
- AI is a powerful tool that can pose threats and offer solutions in the fight against disinformation. It is essential to harness AI effectively to protect democracy and maintain information integrity
- Misinformation and disinformation can have severe consequences, especially during elections. The impact of misleading information on democratic processes is a critical concern that must be addressed
- The challenges facing information integrity are compounded by the high costs of producing quality news and verified knowledge. This financial burden makes it difficult to sustain a healthy information ecosystem
- AI facilitates the rapid and cost-effective creation and dissemination of information. This alters how people process and perceive content, potentially leading to an erosion of shared facts and realities
- Generative AI is increasingly used to deceive and influence public perception through targeted messaging. This creates a predatory environment where democracy is at risk from powerful economic actors
- Trust in technology companies and government institutions is fragile and can be easily lost. The intertwining of trust in technology and government complicates the regulatory landscape for democracies
15:00–20:00
The European Union faces increasing pressure to uphold the Brussels effect amid challenges posed by disinformation and the influence of large technology companies. AI tools present both opportunities and risks in combating disinformation, necessitating a focus on transparency and accountability in their application.
- The pressure on the European Union to maintain the Brussels effect is increasing. This is especially true in the context of disinformation and the influence of large technology companies
- Media literacy alone is not a complete solution to the challenges posed by disinformation. Many individuals remain unaware of the predatory nature of the information environment
- Disinformation is complex and often not illegal, which makes it difficult to address through policy. AI tools can both create and combat disinformation, complicating the response
- Platforms currently use AI to detect disinformation. However, the focus should be on ensuring transparency and accountability in how these tools are applied
- The consequences of flagging misleading posts must be carefully considered. Disinformation can range from harmless to potentially harmful, especially in influencing elections
- A comprehensive policy agenda should prioritize making platforms responsible for their use of AI in content moderation. It should also address the systemic risks associated with disinformation
20:00–25:00
Disinformation is a multifaceted issue that requires a comprehensive approach beyond simply prohibiting misleading content. Effective tools must empower individuals and organizations to combat misinformation while fostering democratic participation and integrity in online communication.
- Disinformation is a complex issue that cannot simply be prohibited or ignored. It often exists in a gray area where misleading content may not be illegal but remains problematic
- AI tools can facilitate the spread of disinformation and help combat it. The challenge lies in ensuring these tools are used transparently and accountably to address disinformation effectively
- Creating a healthier information environment requires a broader approach than tackling individual instances of disinformation. It involves building systems that support democratic participation and strengthen the integrity of online communication
- Fact-checking is essential in the current landscape, but it is not sufficient on its own. A more comprehensive strategy is needed to address the systemic issues affecting information environments
- Tools developed to combat disinformation should empower individuals and organizations rather than rely solely on large technology companies. Citizen fact-checkers and journalists need accessible resources to counter misinformation effectively
- Technical solutions must be user-friendly while providing powerful functionalities. These tools should evolve to keep pace with the changing strategies of disinformation campaigns
25:00–30:00
Artificial intelligence is increasingly being utilized to spread disinformation, significantly threatening journalism and media integrity. The efficiency and low cost of AI tools make them essential for combating disinformation while also raising concerns about their potential misuse.
- Artificial intelligence is being used to spread disinformation, posing a significant threat to journalism and media integrity. For instance, AI-generated newspapers can produce thousands of articles tailored to disinformation agendas
- Deepfakes have become increasingly sophisticated. A recent incident during the Irish elections involved a realistic deepfake that falsely claimed a leading candidate was withdrawing just before the vote
- The efficiency and low cost of AI tools make them essential in combating disinformation. Leveraging AI's capabilities can help counteract the threats posed by disinformation campaigns
- AI can assist in enforcing existing legislation, such as the Digital Services Act. This act requires platforms to mitigate risks associated with disinformation and provide more transparency about the content circulating on their platforms
- Practical tools like fact-checking are essential for compliance with regulatory frameworks. The code of conduct on disinformation emphasizes the importance of fact-checking obligations for platforms
- Projects under the Horizon program aim to enhance fact-checking efficiency and detect manipulative behavior. These initiatives focus on using AI to analyze polarization in debates across member states
30:00–35:00
AI tools are being developed to combat disinformation, but there are concerns that technology may exacerbate existing problems. Building transparency and trust with audiences is crucial for ensuring the integrity of information, especially in the context of the upcoming 2024 elections.
- AI tools are being developed to combat disinformation. However, there are concerns that using technology against technology may escalate existing problems
- Building transparency and trust with audiences is crucial. Users need to understand how information is presented on various platforms and networks
- The 2024 elections are viewed as a significant moment for AI and democracy. Deepfakes present challenges but also encourage citizen engagement with trusted sources
- Media literacy is essential for individuals to critically assess information. Disinformation can become harmful when it reaches large audiences
- Regulatory measures, such as the Digital Services Act, are necessary. These measures ensure that platforms act in the public interest rather than solely for profit
- Algorithms on social media often prioritize sensationalist content. This can lead to a distorted information landscape that undermines democracy
35:00–40:00
Europe is grappling with the challenge of leveraging artificial intelligence in information technology while safeguarding fundamental rights. The existing regulatory framework, including the Digital Services Act and the AI Act, aims to enhance the response to disinformation and uphold democratic values.
- Europe faces challenges in leveraging artificial intelligence for information technology while ensuring that fundamental rights are not compromised. The question of whether Europe is falling behind in this area remains complex and unresolved
- A comprehensive regulatory framework exists in Europe, including the Digital Services Act and the AI Act. These regulations aim to strengthen the response to disinformation and protect democratic values
- Denmark advocates for the use of copyright law to protect individuals from deepfakes. This includes safeguarding personal characteristics such as appearance and voice, which is crucial in the digital landscape
- Media literacy and digital literacy are essential for combating disinformation. However, declining basic literacy levels globally exacerbate the challenges in understanding and navigating information
- Big tech firms hold significant power in the media landscape, leading to market concentration. This concentration provides advantages for these companies in the artificial intelligence market, raising concerns about media control
- Europe is in a unique position to question which technologies align with its values. The ongoing investigation into a major tech company's potential breach of the Digital Markets Act reflects the need for accountability in tech practices
40:00–45:00
Social media platforms require a robust regulatory framework, including the Digital Services Act, Digital Markets Act, and AI Act, to enhance accountability. Additionally, fostering media literacy and utilizing AI tools for fact-checking are essential for combating disinformation and ensuring electoral integrity.
- Social media platforms need to be held more accountable. This requires a comprehensive regulatory framework that includes the Digital Services Act, Digital Markets Act, and AI Act
- Implementing these regulations is crucial. However, it is equally important to approach enforcement carefully to avoid simply imposing fines on technology companies
- In addition to regulatory measures, voluntary frameworks like the code of conduct on disinformation can help platforms collaborate with non-governmental organizations to enhance electoral integrity
- Media literacy is essential for building citizen resilience against manipulation. Individuals must critically engage with the information they encounter online
- AI-based tools can significantly enhance human fact-checking efforts. They monitor various sources and support decision-making in verifying news items
- These tools can assist in detecting manipulated media, such as altered videos or audio. This makes it easier for fact-checkers to identify misinformation
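The bullets above describe AI-assisted fact-checking only at a high level. As a minimal illustration of one common first step, not any project's actual implementation, a verification workflow can begin by matching an incoming claim against a repository of previously fact-checked claims. The function names, the toy repository, and the word-overlap similarity used here are all hypothetical simplifications; production systems rely on far more robust semantic matching.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def best_match(claim, repository, threshold=0.3):
    """Return the repository entry whose claim text is most similar
    to `claim` (Jaccard overlap of word sets), or None if no entry
    reaches the threshold."""
    claim_toks = tokens(claim)
    best, best_score = None, 0.0
    for entry in repository:
        entry_toks = tokens(entry["claim"])
        union = claim_toks | entry_toks
        score = len(claim_toks & entry_toks) / len(union) if union else 0.0
        if score > best_score:
            best, best_score = entry, score
    return best if best_score >= threshold else None

# Toy repository of already fact-checked claims (illustrative only).
repository = [
    {"claim": "a leading candidate withdrew just before the vote",
     "verdict": "false"},
    {"claim": "turnout figures were revised after the election",
     "verdict": "true"},
]

match = best_match("leading candidate withdrew before the vote", repository)
```

A real deployment would replace the word-overlap score with multilingual semantic similarity and route near-matches to a human fact-checker, which is the human-in-the-loop pattern the panel describes.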
45:00–50:00
AI tools are being developed to enhance the capabilities of human fact-checkers, allowing for a more efficient approach to combating disinformation. The integration of advanced tools for professionals and simpler tools for the general public is essential for empowering citizens to verify information effectively.
- AI tools are essential for enhancing the capabilities of human fact-checkers. These tools can monitor various sources and assist in analyzing news items effectively
- Integrating AI with human expertise allows for a more efficient approach to combating disinformation. This collaboration helps identify manipulated videos and audio content
- There is a need for advanced tools for trained professionals and simpler tools for the general public. This dual approach empowers citizens to verify information quickly and effectively
- Media literacy is crucial for young people who spend significant time on social media. Teaching them to use AI tools helps them critically assess the information they encounter online
- The accountability of social media platforms depends on compliance with regulatory obligations. Meaningful changes to algorithms and policies are necessary to ensure platforms fulfill their responsibilities
- Transparency in AI platforms regarding information sources is important but not a complete solution. Users must remain critical of the content they consume, even with increased transparency
50:00–55:00
Different tools are necessary to address the diverse needs of fact-checkers, journalists, and citizens. Collaboration among researchers, media professionals, and policymakers is crucial for developing resilient tools to combat disinformation.
- Different tools are necessary to address the diverse needs of fact-checkers, journalists, and citizens. Each group requires tailored solutions to effectively combat disinformation in their contexts
- User-friendly tools are essential for widespread adoption among media professionals and the public. Ensuring accessibility will enhance their effectiveness in promoting media literacy
- Collaboration among researchers, media professionals, and policymakers is crucial for developing resilient tools. Engaging all stakeholders will help create a supportive ecosystem to combat disinformation
- The European Fact-Checking Standards Network has initiated a project that utilizes AI for pre-bunking disinformation. This proactive approach aims to address misinformation before it spreads, especially during elections
- Concerns about AI-driven targeted disinformation raise questions about potential bans on certain applications during elections. Balancing creative expression with the risks of disinformation presents a complex challenge
- Strengthening community ties is a key strategy for fostering healthier online information ecosystems. Rather than banning specific tools, the focus should be on enhancing connections among users to combat misinformation
55:00–60:00
Banning certain AI applications during elections raises political challenges and may restrict creative expression. A focus on healthier online ecosystems is preferred over outright bans, emphasizing the need for safe tools and equitable access.
- Banning certain AI applications during election periods raises political challenges. It may restrict creative expression and interaction among users. A focus on healthier online ecosystems is preferred over outright bans
- The controversy surrounding Grok, the AI chatbot integrated into the X platform, illustrates the complexities of AI tools tied to social networks. These tools can be harmful, particularly to women and girls, highlighting the need for careful consideration of their use
- Limiting access to AI tools based on payment raises questions about equity in democratic processes. The focus should be on who can afford these tools rather than which tools are permissible
- Ensuring the safety of tools available during elections is crucial. The DSA election guidelines emphasize risk mitigation, leading to the deactivation of chatbots that could not provide reliable election-related information
- The Center for Democratic Resilience aims to foster collaboration among member states and the Commission. This initiative seeks to create strong joint responses to disinformation and enhance interdisciplinary cooperation
- New projects like the European Network of Fact Checkers aim to improve collaboration among fact checkers. Establishing a repository of fact checks can enhance the effectiveness of independent verification efforts
1:00:00–1:05:00
Collaboration among various stakeholders is essential for developing effective tools to combat disinformation. Engaging end users in the project helps tailor tools to their workflows, making them more effective.
- Collaboration among various stakeholders is essential for developing effective tools to combat disinformation. Engaging different competencies from companies and universities enhances the understanding of user needs
- Involving end users in the project helps tailor tools to their workflows, making them more effective. This hybrid approach combines human insight with technological solutions, which is crucial for success
- Regular meetings and ongoing collaboration are necessary to adapt to evolving information strategies and user needs. Continuous engagement ensures that tools remain relevant and effective in addressing disinformation
- AI fact-checking tools can potentially lead to user passivity, but they also encourage critical thinking. Media literacy initiatives are vital for raising awareness and promoting active engagement with content
- Surveys indicate that users prefer trusted news sources over AI tools for verifying information. This suggests a need to build trust in both traditional media and emerging technologies
- Accountability for companies in the disinformation landscape is a pressing concern. Ensuring that organizations take responsibility for their role in spreading misinformation is crucial for maintaining media integrity
1:05:00–1:10:00
The discussion highlights the need for extending public service regulation to digital information providers, including AI platforms, as younger audiences increasingly rely on influencers for news. Balancing regulation is crucial, with ongoing debates about the extent of necessary oversight in the evolving media landscape.
- State-owned or EU-controlled options may be necessary if companies do not fully comply with transparency regulations. This raises questions about the effectiveness of current digital information providers
- The notion of public service and regulation should extend beyond traditional media. It must include digital information providers and AI platforms, especially as younger audiences increasingly rely on influencers for news
- Finding the right balance in regulation is crucial. Some argue that the European Commission regulates too much, while others believe more regulation is needed. The evolving nature of media services must be considered
- The European Commission is exploring solutions to foster competitive social media platforms. However, it does not intend to create its own social media service. Instead, it aims to leverage existing strengths in public service media
- Civic tech experiments and distributed technologies like Mastodon offer promising alternatives for information sharing. These approaches can enhance resilience and improve information circulation across Europe
- Collaboration among tech companies is essential for strengthening the information environment. This is especially important regarding election integrity. Sharing best practices can help improve the overall quality of information available to users
- The debate on whether to implement ex-ante regulation or ex-post enforcement remains unresolved. Providing users with better tools to navigate the online environment is crucial for safeguarding free speech in European democracy
1:10:00–1:15:00
Disinformation is often not illegal, complicating efforts to combat it, and highlights the need for accountability from platforms. Collaboration among stakeholders is essential for creating a healthier information ecosystem and improving AI tools.
- Ex-ante regulation and ex-post enforcement are critical topics in addressing disinformation. It is essential to understand the ecosystem and identify effective entry points for intervention
- Disinformation is often not illegal, complicating efforts to combat it. The focus should be on how harm occurs and how to hold platforms accountable for their practices
- Collaboration among stakeholders is vital for creating a healthier information ecosystem. Engaging regulators, civil society, and media literacy specialists can lead to more effective solutions
- European initiatives like the AI Act and the democracy action plan are important, but more efforts are needed. Continuous interaction with end users will help improve AI systems and tools
- An unequal balance of power exists regarding knowledge about online platforms and AI systems. Prioritizing research and evidence is essential to design effective solutions for these challenges
- Healthy information ecosystems require strengthening various components, including legacy organizations. National online resources can serve as bridges between universities, journalists, and the public for fact-checking
1:15:00–1:20:00
Trust in media is a significant concern, with a strong community in Europe committed to addressing disinformation challenges. Collaboration among regulators, platforms, and civil society is essential for creating effective solutions.
- Trust in media is currently a significant concern. While challenges are immense, there is a strong community in Europe committed to addressing these issues
- Collaboration among various stakeholders is essential. No single entity, whether a regulator, platform, or civil society, can tackle disinformation alone
- The importance of regulation working alongside technology and civil society efforts is emphasized. These elements must be integrated to create effective solutions
- There is a growing interest in the topic of disinformation. Many individuals are eager to engage, propose ideas, and seek answers to pressing questions
- Despite numerous questions that could not be addressed during the discussion, the conversation about disinformation and media trust will continue in the future
- Participants are encouraged to stay informed about upcoming debates and discussions. Community engagement is crucial in tackling these challenges